Time-Critical Volume Rendering
For the past twelve months, we have conducted and completed a joint research project entitled "Time-Critical Volume Rendering" with NASA Ames. As planned, high-performance volume rendering algorithms were developed by exploring several new acceleration techniques, including object presence acceleration, parallel processing, and hierarchical level-of-detail representation. Using these techniques, initial experiments achieved real-time rendering rates of more than 10 frames per second on various 3D datasets at full resolution. Joint papers and technical reports, as well as an interactive real-time demo, have been produced as results of this project.
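The time-critical aspect described above hinges on trading resolution for speed under a frame-time budget. The following is a minimal, hypothetical sketch of such a level-of-detail selector; the function name and the per-voxel cost constant are illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical sketch of time-critical level-of-detail selection:
# given a per-voxel render-cost estimate and a frame-time budget,
# pick the coarsest-necessary level of a resolution hierarchy.
# Each level halves the resolution per axis (8x fewer voxels).

def choose_lod(base_resolution, budget_ms, cost_per_voxel_ms=2e-5):
    """Return (level, resolution); level 0 is full resolution."""
    level = 0
    res = base_resolution
    while res > 1:
        est_ms = (res ** 3) * cost_per_voxel_ms  # crude cost model
        if est_ms <= budget_ms:
            break
        level += 1
        res //= 2  # drop to the next coarser level
    return level, res
```

For example, with the assumed cost constant, a 256^3 volume misses a 100 ms budget at full resolution but fits at the next coarser level (128^3).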
Visual Simulation of Flow
We have adopted a numerical method from computational fluid dynamics, the Lattice Boltzmann Method (LBM), for real-time simulation and visualization of flow and amorphous phenomena, such as clouds, smoke, fire, haze, dust, radioactive plumes, and airborne biological or chemical agents. Unlike other approaches, LBM discretizes the micro-physics of local interactions and can handle very complex boundary conditions, such as deep urban canyons, curved walls, indoor environments, and the dynamic boundaries of moving objects. Due to its discrete nature, LBM lends itself to multi-resolution approaches, and its computational pattern, which is similar to cellular automata, is easily parallelizable. We have accelerated LBM on commodity graphics processing units (GPUs), achieving real-time or even faster-than-real-time simulation on a single GPU or on a GPU cluster. We have implemented a 3D urban navigation system and applied it in New York City with real-time live sensor data. In addition to a pivotal application in the simulation of airborne contaminants in urban environments, this approach will enable the development of other superior prediction and simulation capabilities, computer graphics and games, and a novel technology for computational science and engineering.
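The local stream-and-collide pattern that makes LBM so parallelizable can be sketched in a few lines. Below is a minimal D2Q9 single-relaxation-time (BGK) step on a periodic grid; it is a plain NumPy illustration of the general method, not the project's GPU implementation, and the relaxation parameter is an illustrative assumption.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """BGK equilibrium distribution for density rho and velocity u."""
    cu = np.einsum('qd,dxy->qxy', c, u)        # c_q . u per cell
    usq = np.einsum('dxy,dxy->xy', u, u)       # |u|^2 per cell
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.6):
    """One LBM timestep: stream, then collide (periodic boundaries)."""
    # Streaming: each population moves one cell along its velocity.
    for q in range(9):
        f[q] = np.roll(f[q], shift=(c[q, 0], c[q, 1]), axis=(0, 1))
    # Collision: relax toward the local equilibrium.
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->dxy', c, f) / rho
    return f - (f - equilibrium(rho, u)) / tau
```

Because both phases touch only a cell and its immediate neighbors, the update maps naturally onto GPU fragment/compute kernels, which is the property the acceleration above exploits.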
Cube-4 - A Scalable Architecture for Real-Time Volume Rendering
We present Cube-4, a special-purpose volume rendering architecture that is capable of rendering high-resolution (e.g., 1024^3) datasets at 30 frames per second. The underlying algorithm, called slice-parallel ray-casting, uses tri-linear interpolation of samples between data slices for parallel and perspective projections. The architecture uses a distributed interleaved memory, several parallel processing pipelines, and an innovative parallel dataflow scheme that requires no global communication, except at the pixel level. This leads to local, fixed-bandwidth interconnections and has the benefits of high memory bandwidth, real-time data input, modularity, and scalability. We have simulated the architecture and have implemented a working prototype of the complete hardware on a configurable custom hardware machine. Our results indicate true real-time performance for high-resolution datasets and linear scalability of performance with the number of processing pipelines.
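The tri-linear interpolation at the core of slice-parallel ray-casting blends each sample between two adjacent data slices from the eight surrounding voxels. The following is a plain software illustration of that operation, not the Cube-4 pipelined hardware datapath:

```python
import numpy as np

def trilinear(volume, x, y, z):
    """Interpolate a 3D scalar volume at fractional position (x, y, z)."""
    x0, y0, z0 = int(x), int(y), int(z)      # base voxel of the cell
    fx, fy, fz = x - x0, y - y0, z - z0      # fractional offsets
    c = volume[x0:x0+2, y0:y0+2, z0:z0+2].astype(float)  # 2x2x2 cell
    # Interpolate along x, then y, then z.
    cx = c[0] * (1 - fx) + c[1] * fx
    cy = cx[0] * (1 - fy) + cx[1] * fy
    return cy[0] * (1 - fz) + cy[1] * fz
```

In Cube-4's slice-parallel scheme, the eight voxel fetches for each sample are what the distributed interleaved memory is organized to supply conflict-free, one sample per pipeline per cycle.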
Sheared Interpolation and Gradient Estimation for Real-Time Volume Rendering
In this paper we present a technique for the interactive control and display of static and dynamic 3D datasets. We describe novel ways of tri-linear interpolation and gradient estimation for a real-time volume rendering system, using coherency between rays. We show simulation results that compare the proposed methods to traditional algorithms and present them in the context of Cube-3, a special-purpose architecture capable of rendering 512^3 16-bit-per-voxel datasets at over 20 frames per second.
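Gradient estimation in volume rendering supplies the surface normals used for shading. The baseline that the paper's sheared, ray-coherent variants improve on is a per-voxel central-difference estimator, sketched below; this shows only the standard estimator, not the paper's coherency-exploiting method.

```python
import numpy as np

def central_gradient(volume):
    """Central-difference gradient of a 3D scalar field, per voxel.

    Returns an array of shape (3, X, Y, Z); boundaries wrap
    periodically here for simplicity (a real renderer would clamp).
    """
    v = volume.astype(float)
    gx = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / 2.0
    gy = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / 2.0
    gz = (np.roll(v, -1, axis=2) - np.roll(v, 1, axis=2)) / 2.0
    return np.stack([gx, gy, gz])
```

Computing this naively costs six extra voxel fetches per sample; exploiting coherency between neighboring rays, as the paper does, amortizes those fetches across adjacent samples.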